7 research outputs found

    The Philosophy of Taking Conspiracy Theories Seriously

    During the last few decades, interest in conspiracy theories has proliferated both in our culture and in academia. In this piece, I review a new book on the topic of conspiracy theory theory (that is, the theory of conspiracy theories), Taking Conspiracy Theories Seriously, edited by M R. X. Dentith. To contextualize the review, I first turn to the '90s to see what sparked current interest in conspiracy theories within the field of analytic philosophy. I then critically assess the current limitations of social epistemology as a field. Among other things, I show how accepted assumptions in social epistemology cause cross-disciplinary disagreements with other social sciences, present the dilemma of trivializing whistle-blowers, and discuss a few neglected roles technologies play in belief formation.

    Can Artificial Entities Assert?

    There is an existing debate regarding the view that technological instruments, devices, or machines can assert or testify. A standard view in epistemology is that only humans can testify. However, the notion of quasi-testimony acknowledges that technological devices can assert or testify under some conditions, while still acknowledging that humans and machines are not the same. Indeed, there are four relevant differences between humans and instruments. First, unlike human assertion, machine assertion is not imaginative or playful. Second, machine assertion is prescripted and context-restricted: computers currently cannot easily switch contexts or make meaningful, relevant assertions in contexts for which they were not programmed. Third, while both humans and computers make errors, they do so in different ways. Computers are very sensitive to small errors in input, which may cause them to make big errors in output; moreover, automatic error control is based on finding irregularities in data without trying to establish whether they make sense. Fourth, testimony is produced by a human with moral worth, while quasi-testimony is not. Ultimately, the notion of quasi-testimony can serve as a bridge between different philosophical fields that deal with instruments and testimony as sources of knowledge, allowing them to converse and agree on a shared description of reality while maintaining their distinct conceptions and ontological commitments about knowledge, humans, and nonhumans.

    Towards the Epistemology of the Internet of Things: Techno-Epistemology and Ethical Considerations Through the Prism of Trust

    This paper discusses the epistemology of the Internet of Things (IoT) by focusing on the topic of trust. It presents various frameworks of trust and argues that the ethical framework of trust is what constitutes our responsibility to reveal desired norms and standards and embed them in other frameworks of trust. The first section briefly presents the IoT and scrutinizes the scarce philosophical work that has been done on this subject so far. The second section suggests that the field of epistemology is not sufficiently capable of dealing with technologies and presents a possible solution to this problem. It is argued that knowledge is not only a social phenomenon but also a technological one, and that in order to address epistemological issues in technology, we need to carefully depart from traditional epistemic analysis and form a new approach that is technological (termed here Techno-Epistemology). The third and fourth sections engage in an epistemic analysis of trust by dividing it into various frameworks. The last section argues that these various frameworks of trust can be understood to form a trustworthy large-scale socio-technological system, emphasizing the place of ethical trust as constituting our commitment to give proper accounts of all of the other frameworks.

    Trust and Distributed Epistemic Labor

    This chapter explores properties that bind individuals, knowledge, and communities together. Section 1 introduces Hardwig's argument from trust in others' testimonies as entailing that trust is the glue that binds individuals into communities. Section 2 asks "what grounds trust?" by exploring assessment of collaborators' explanatory responsiveness, formal indicators such as affiliation and credibility, appreciation of peers' tacit knowledge, game-theoretical considerations, and the role that the moral character of peers, social biases, and social values play in grounding trust. Section 3 deals with establishing reliability standards for the formation and breaching of trust. Different epistemic considerations, and the inductive risks they underpin, are examined through various communication routes within a discipline, between disciplines, and to the public. Section 4 examines whether a collective entity can be trusted over and above the trust that is given to its individual members. Section 5 deals with the roles technological artifacts play in distributed research and collective knowledge. It presents the common view in which genuine trust cannot, in principle, be accorded to artifacts, as well as an opposing view. We show that what counts as a genuine object of trust is relevant to debates about the boundaries of collective agency and as a criterion for extended cognitive systems.

    Making Sense of the Conceptual Nonsense 'Trustworthy AI'

    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms of 'Trustworthy AI' and the consequence of ignoring them: if the concept of 'Trustworthy AI' continues to be used, we risk attributing responsibilities to agents who cannot be held responsible and, consequently, eroding social structures of accountability and liability. Nevertheless, despite suggestions to shift the paradigm from 'Trustworthy AI' to 'Reliable AI', I argue that, realistically, this concept will continue to be used. I end by arguing that, ultimately, AI ethics is also about power, social justice, and scholarly activism. Therefore, I propose that community-driven and social justice-oriented ethicists of AI and trust scholars (a) further focus on democratic aspects of trust formation; and (b) draw attention to critical social aspects highlighted by phenomena of distrust. This way, it will be possible to further reveal shifts in power relations, challenge unfair status quos, and suggest meaningful ways to safeguard the interests of citizens.

    A Hebrew Celebration of Philosophy of Technology (Published as: A Brief History of Philosophy of Technology) [Hebrew, Preprint]

    A review of Galit Wellner's translation into Hebrew of Don Ihde's (2009) "Postphenomenology and Technoscience".